21 research outputs found

    An Ensemble Method to Automatically Grade Diabetic Retinopathy with Optical Coherence Tomography Angiography Images

    Diabetic retinopathy (DR) is a complication of diabetes and one of the major causes of vision impairment in the global population. As the early-stage manifestation of DR is usually very mild and hard to detect, accurate diagnosis via eye screening is clinically important to prevent vision loss at later stages. In this work, we propose an ensemble method to automatically grade DR using ultra-wide optical coherence tomography angiography (UW-OCTA) images available from the Diabetic Retinopathy Analysis Challenge (DRAC) 2022. First, we adopt state-of-the-art classification networks, namely ResNet, DenseNet, EfficientNet, and VGG, and train them to grade UW-OCTA images on different splits of the available dataset. Ultimately, we obtain 25 models, of which the top 16 are selected and ensembled to generate the final predictions. During training, we also investigate a multi-task learning strategy, adding an auxiliary classification task, image quality assessment, to improve model performance. Our final ensemble model achieved a quadratic weighted kappa (QWK) of 0.9346 and an area under the curve (AUC) of 0.9766 on the internal testing dataset, and a QWK of 0.839 and an AUC of 0.8978 on the DRAC challenge testing dataset.
    Comment: 13 pages, 6 figures, 5 tables. To appear in Diabetic Retinopathy Analysis Challenge (DRAC), Bin Sheng et al., MICCAI 2022 Challenge, Lecture Notes in Computer Science, Springer
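
    As an illustration of the ensembling step, the sketch below averages per-model class probabilities and scores the result with scikit-learn's quadratic weighted kappa; the model-loading and top-16 selection logic are hypothetical placeholders, not the authors' code.

```python
# A minimal sketch of probability-averaging ensembling, assuming each of the
# selected models exposes softmax probabilities over the DR grades.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def ensemble_grade(prob_list):
    """Average per-model class probabilities and take the argmax grade.

    prob_list: list of (n_images, n_grades) arrays, one per kept model.
    """
    mean_probs = np.mean(np.stack(prob_list, axis=0), axis=0)
    return mean_probs.argmax(axis=1)

# Hypothetical usage:
#   y_pred = ensemble_grade(probs_per_model)   # probs from the 16 kept models
#   qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
```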

    autoPET Challenge 2023: nnUNet-based whole-body 3D PET-CT Tumour Segmentation

    Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) combined with Computed Tomography (CT) is critical in oncology for the identification of solid tumours and the monitoring of their progression. However, precise and consistent lesion segmentation remains challenging, as manual segmentation is time-consuming and subject to intra- and inter-observer variability. Despite their promise, automated segmentation methods often struggle with false-positive segmentation of regions of healthy metabolic activity, particularly when presented with such a complex range of tumours across the whole body. In this paper, we explore the application of nnUNet to tumour segmentation of whole-body PET-CT scans and conduct experiments on optimal training and post-processing strategies. Our best model obtains a Dice score of 69% and false negative and false positive volumes of 6.27 mL and 5.78 mL respectively, on our internal test set. This model was submitted as part of the autoPET 2023 challenge. Our code is available at: https://github.com/anissa218/autopet_nnune
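
    For reference, the three reported metrics can be computed from binary masks roughly as below; the official autoPET evaluation defines false positive and false negative volumes per connected component, so this per-voxel version is a simplification for illustration only.

```python
# A minimal per-voxel sketch of Dice, false positive volume, and false
# negative volume for binary lesion masks (simplified vs. the challenge's
# connected-component definition).
import numpy as np

def segmentation_metrics(pred, gt, voxel_volume_ml):
    """pred, gt: boolean 3D arrays; voxel_volume_ml: volume of one voxel in mL."""
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum() + 1e-8)
    fp_vol = np.logical_and(pred, ~gt).sum() * voxel_volume_ml  # healthy tissue marked as lesion
    fn_vol = np.logical_and(~pred, gt).sum() * voxel_volume_ml  # missed lesion tissue
    return dice, fp_vol, fn_vol
```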

    Accurate volume alignment of arbitrarily oriented tibiae based on a mutual attention network for osteoarthritis analysis

    Damage to cartilage is an important indicator of osteoarthritis progression, but manual extraction of cartilage morphology is time-consuming and prone to error. To address this, we hypothesize that automatic labeling of cartilage can be achieved through the comparison of contrasted and non-contrasted Computed Tomography (CT). However, this is non-trivial, as the pre-clinical volumes are at arbitrary starting poses due to the lack of standardized acquisition protocols. We therefore propose an annotation-free deep learning method, D-Net, for accurate and automatic alignment of pre- and post-contrasted cartilage CT volumes. D-Net is based on a novel mutual attention network structure that captures large-range translation and full-range rotation without the need for a prior pose template. CT volumes of mouse tibiae are used for validation: networks are trained with synthetic transformations and tested on real pre- and post-contrasted CT volumes. Analysis of Variance (ANOVA) was used to compare the different network structures. Cascaded as a multi-stage network, our proposed method, D-Net, achieves a Dice coefficient of 0.87 and significantly outperforms other state-of-the-art deep learning models in the real-world alignment of 50 pairs of pre- and post-contrasted CT volumes.
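
    The paper's exact architecture is not reproduced here, but a mutual attention block of the kind described, where features of each volume attend to those of the other, might look like the following PyTorch sketch; the layer sizes and the use of nn.MultiheadAttention are illustrative assumptions.

```python
# A minimal sketch of a mutual (cross-) attention block between features of
# the pre- and post-contrast volumes; not the actual D-Net implementation.
import torch
import torch.nn as nn

class MutualAttention(nn.Module):
    def __init__(self, channels, heads=4):  # channels must divide by heads
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feat_a, feat_b):
        # feat_*: (batch, channels, D, H, W) feature maps of the two volumes
        a = feat_a.flatten(2).transpose(1, 2)  # (batch, voxels, channels)
        b = feat_b.flatten(2).transpose(1, 2)
        # Each volume attends to the other; the correspondence-aware features
        # could then feed a head regressing translation and rotation.
        a_att, _ = self.attn_ab(a, b, b)
        b_att, _ = self.attn_ba(b, a, a)
        return a_att, b_att
```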

    Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years

    Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function [1]. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women [2], selected using World Health Organization recommendations for growth standards [3]. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age [4,5]. The atlas was produced using 1,059 optimal-quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline [6-8]. The atlas corresponds structurally to published magnetic resonance images [9], but with finer anatomical detail in deep grey matter. Between-site variability represented less than 8.0% of the total variance of all brain measures, supporting the pooling of data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks' gestation, with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks' gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.
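
    As a hypothetical illustration of the between-site variance check, a one-way ANOVA-style sum-of-squares decomposition gives the fraction of variance attributable to study site; the `measures` and `sites` arrays below stand in for the cohort data and are not the authors' analysis code.

```python
# A minimal sketch of a between-site variance fraction, computed as the
# ratio of between-group to total sum of squares across study sites.
import numpy as np

def between_site_variance_fraction(measures, sites):
    """measures: 1D array of one brain measure; sites: matching site labels."""
    grand_mean = measures.mean()
    ss_total = ((measures - grand_mean) ** 2).sum()
    ss_between = sum(
        (sites == s).sum() * (measures[sites == s].mean() - grand_mean) ** 2
        for s in np.unique(sites)
    )
    return ss_between / ss_total  # reported to stay below 0.08 in the study
```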

    Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation

    Rectal tumour segmentation in dynamic contrast-enhanced MRI (DCE-MRI) is a challenging task, and an automated and consistent method would be highly desirable for improving the modelling and prediction of patient outcomes from tissue contrast-enhancement characteristics, particularly in routine clinical practice. We develop a framework to automate DCE-MRI tumour segmentation by introducing: perfusion-supervoxels, which over-segment and classify DCE-MRI volumes using their dynamic contrast-enhancement characteristics; and the pieces-of-parts graphical model, which adds global (anatomical) constraints that further refine the supervoxel components comprising the tumour. The framework was evaluated on 23 DCE-MRI scans of patients with rectal adenocarcinomas, achieving a voxelwise area under the receiver operating characteristic curve (AUC) of 0.97 compared to expert delineations. Creating a binary tumour segmentation, 21 of the 23 cases were segmented correctly, with a median Dice similarity coefficient (DSC) of 0.63, which is close to the inter-rater variability of this challenging task. A second study is also included to demonstrate the method's generalisability; it achieved a DSC of 0.71. The framework achieves promising results for the underexplored area of rectal tumour segmentation in DCE-MRI, and the methods have potential to be applied to other DCE-MRI and supervoxel segmentation problems.
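
    A minimal sketch of how perfusion-supervoxels might be generated, assuming the DCE-MRI series is held as a 4D array with time as the channel axis so that scikit-image's SLIC clusters voxels on their enhancement curves; the normalisation and parameters are illustrative, not the paper's exact pipeline.

```python
# A minimal sketch of perfusion-supervoxel generation with SLIC over the
# temporal enhancement curves of a DCE-MRI volume.
import numpy as np
from skimage.segmentation import slic

def perfusion_supervoxels(dce_4d, n_segments=2000, compactness=0.1):
    """dce_4d: (D, H, W, T) dynamic contrast-enhanced series."""
    # Normalise each voxel's time curve against its pre-contrast baseline so
    # clustering reflects enhancement shape rather than raw intensity.
    baseline = dce_4d[..., :1].astype(float)
    enhancement = (dce_4d - baseline) / (baseline + 1e-8)
    return slic(enhancement, n_segments=n_segments,
                compactness=compactness, channel_axis=-1, start_label=1)
```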

    A level-set approach to joint image segmentation and registration with application to CT lung imaging

    Automated analysis of structural imaging such as lung Computed Tomography (CT) plays an increasingly important role in medical imaging applications. Despite significant progress in the development of image registration and segmentation methods, lung registration and segmentation remain challenging tasks. In this paper, we present a novel approach with a new mathematical formulation that jointly segments and registers three-dimensional lung CT volumes. The algorithm is based on a level-set formulation that merges classic Chan–Vese segmentation with active dense displacement field estimation. Combining registration with segmentation has two key advantages: it eliminates the problem of initializing surface-based segmentation methods, and it incorporates prior knowledge into the registration in a mathematically justified manner, while remaining computationally attractive. We evaluate our framework on a publicly available lung CT data set to demonstrate the properties of the new formulation. The results show improved accuracy for our joint segmentation and registration algorithm compared to registration and segmentation performed separately.
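
    To make the coupling concrete, a canonical joint energy of this kind combines the Chan–Vese region terms, evaluated on the image warped by the displacement field u, with a regulariser on u; the weights and exact terms of the paper's formulation may differ, so this is a sketch of the general form only.

```latex
% Sketch of a generic joint segmentation-registration energy (not the
% paper's exact functional): \varphi is the level set, H the Heaviside
% function, u the dense displacement field, c_1/c_2 the region means.
E(\varphi, u) =
    \int_\Omega \lvert I(x + u(x)) - c_1 \rvert^2 \, H(\varphi(x)) \, dx
  + \int_\Omega \lvert I(x + u(x)) - c_2 \rvert^2 \, \bigl(1 - H(\varphi(x))\bigr) \, dx
  + \mu \int_\Omega \lvert \nabla H(\varphi(x)) \rvert \, dx
  + \lambda \int_\Omega \lVert \nabla u(x) \rVert^2 \, dx
```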

    Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention

    Segmentation of head and neck (H&N) tumours and prediction of patient outcome are crucial for disease diagnosis and treatment monitoring. Current development of robust deep learning models is hindered by the lack of large multi-centre, multi-modal data with quality annotations. The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing methods for segmenting the primary gross target volume on fluorodeoxyglucose (FDG)-PET and Computed Tomography images and for predicting progression-free survival in H&N oropharyngeal cancer. For the segmentation task, we proposed a new network based on an encoder-decoder architecture with full inter- and intra-skip connections to take advantage of low-level and high-level semantics at full scales. Additionally, we used Conditional Random Fields as a post-processing step to refine the predicted segmentation maps. We trained multiple neural networks for tumour volume segmentation and ensembled their segmentations, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation and 0.76 on the challenge testing data set. For the progression-free survival prediction task, we proposed a Cox proportional hazards regression combining clinical, radiomic, and deep learning features. Our survival prediction model achieved a concordance index of 0.82 in cross-validation and 0.62 on the challenge testing data set.
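
    A minimal sketch of the survival component under stated assumptions: a Cox proportional hazards model fitted with the lifelines library over a feature table combining clinical, radiomic, and deep features, evaluated by the concordance index; the column names and penalty are illustrative, not the authors' exact pipeline.

```python
# A minimal sketch of Cox proportional hazards regression for
# progression-free survival, using the lifelines library.
import pandas as pd
from lifelines import CoxPHFitter

def fit_survival_model(features: pd.DataFrame):
    """features: one row per patient, with hypothetical columns 'pfs_days'
    (duration) and 'event' (1 = progression observed) plus predictors."""
    cph = CoxPHFitter(penalizer=0.1)  # mild ridge penalty for many features
    cph.fit(features, duration_col="pfs_days", event_col="event")
    return cph, cph.concordance_index_  # model and training C-index
```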

    BEAN: brain extraction and alignment network for 3D fetal neurosonography

    Brain extraction (masking of extra-cranial tissue) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, the Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. It achieved state-of-the-art performance on both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks and a mean DSC of 0.93 for the alignment of the target brain masks. The experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found at www.github.com/felipemoser/kelluwe
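
    A much-simplified sketch of a BEAN-like multi-task network is given below; BEAN's two branches are independent and considerably deeper, so the shared encoder and the 7-parameter similarity output (3 rotations, 3 translations, 1 scale) here are simplifying assumptions for illustration.

```python
# A minimal sketch of a two-task network: a segmentation head for brain
# extraction and a regression head for similarity-transform parameters.
import torch
import torch.nn as nn

class BeanSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task 1: brain-extraction mask (at reduced resolution here).
        self.seg_head = nn.Conv3d(32, 1, 1)
        # Task 2: similarity alignment (3 rotations, 3 translations, 1 scale).
        self.align_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 7)
        )

    def forward(self, vol):  # vol: (batch, 1, D, H, W) ultrasound volume
        feats = self.encoder(vol)
        return torch.sigmoid(self.seg_head(feats)), self.align_head(feats)
```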

    Patch-based lung ventilation estimation using multi-layer supervoxels

    Patch-based approaches have received substantial attention in medical imaging over recent years. One of their potential applications is to provide more anatomically consistent ventilation maps estimated from dynamic lung CT. An assessment of regional lung function can guide radiotherapy, ensuring a more accurate treatment plan and, in turn, sparing well-functioning parts of the lungs. We present a novel method for lung ventilation estimation from dynamic lung CT imaging that combines a supervoxel-based image representation with deformations estimated during deformable image registration performed between peak breathing phases. For this, we propose a method that tracks intensity changes of previously extracted supervoxels. To evaluate the method, we calculate the correlation of the estimated ventilation maps with static ventilation images acquired from hyperpolarized xenon-129 MRI (XeMRI). We also investigate the influence of the image registration method used to estimate deformations between the peak breathing phases in the dynamic CT imaging. We show that our method performs favorably compared to other ventilation estimation methods commonly used in the field, independently of the image registration method applied to dynamic CT. Owing to its patch-based approach, our method may be physiologically more consistent with lung anatomy than previous methods relying on voxel-wise relationships: ventilation is estimated for supervoxels, which tend to group spatially close voxels with similar intensity values. The proposed method was evaluated on a dataset of three lung cancer patients undergoing radiotherapy, yielding an average correlation of 0.485 with XeMRI ventilation images, compared with 0.393 for the intensity-based approach, 0.231 for the Jacobian-based method, and 0.386 for the Hounsfield-unit-averaging method. Within the limitation of the small number of cases analyzed, these results suggest that the presented technique may be advantageous for CT-based ventilation estimation and may more accurately reflect lung physiology.
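
    A minimal sketch of the supervoxel ventilation idea, assuming the inhale image has already been warped into exhale space by deformable registration: each supervoxel's mean intensity change serves as a ventilation surrogate, which can then be correlated with XeMRI values. The exact ventilation formula, correlation measure, and function names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of a supervoxel-wise ventilation surrogate from peak
# breathing phases of dynamic lung CT (intensities in Hounsfield units).
import numpy as np
from scipy.stats import spearmanr

def supervoxel_ventilation(exhale, inhale_warped, labels):
    """Return a per-supervoxel intensity-change map (labels: supervoxel ids)."""
    vent = np.zeros_like(exhale, dtype=float)
    for sv in np.unique(labels[labels > 0]):
        m = labels == sv
        # Relative HU change of the supervoxel acts as a ventilation
        # surrogate (HU + 1000 approximates tissue density).
        vent[m] = (inhale_warped[m].mean() - exhale[m].mean()) \
                  / (exhale[m].mean() + 1000.0)
    return vent

# Hypothetical evaluation against XeMRI, inside a lung mask:
#   rho, _ = spearmanr(vent[mask], xemri[mask])
```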